
    Sybil attacks against mobile users: friends and foes to the rescue

    Collaborative applications for co-located mobile users can be severely disrupted by a Sybil attack, to the point of being unusable. Existing decentralized defences have largely been designed for peer-to-peer networks, not for mobile networks. That is why we propose a new decentralized defence for portable devices, which we call MobID. The idea is that a device manages two small networks in which it stores information about the devices it meets: its network of friends contains honest devices, and its network of foes contains suspicious devices. By reasoning on these two networks, the device is then able to determine whether an unknown individual is carrying out a Sybil attack. We evaluate the extent to which MobID reduces the number of interactions with Sybil attackers and consequently enables collaborative applications. We do so using real mobility and social network data. We also assess the computational and communication costs of MobID on mobile phones.
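
    To make the friends-and-foes idea concrete, here is a minimal Python sketch of MobID-style reasoning. It is an illustration under assumptions, not the paper's actual algorithm: the class name, the overlap heuristic, and the threshold are all invented for exposition.

```python
# Hypothetical sketch: each device keeps a small network of friends
# (honest) and foes (suspicious), and judges an unknown device by how
# its claimed acquaintances overlap with each network.

class MobIDAgent:
    def __init__(self):
        self.friends: set[str] = set()   # devices believed honest
        self.foes: set[str] = set()      # devices believed suspicious

    def record_friend(self, device_id: str) -> None:
        self.friends.add(device_id)
        self.foes.discard(device_id)

    def record_foe(self, device_id: str) -> None:
        self.foes.add(device_id)
        self.friends.discard(device_id)

    def is_likely_sybil(self, claimed_contacts: set[str],
                        threshold: float = 0.5) -> bool:
        """Flag the stranger if its claimed contacts overlap more with
        known foes than with known friends (threshold is illustrative)."""
        friend_overlap = len(claimed_contacts & self.friends)
        foe_overlap = len(claimed_contacts & self.foes)
        total = friend_overlap + foe_overlap
        if total == 0:
            return True  # no corroborating evidence: stay cautious
        return foe_overlap / total > threshold

# Example: a stranger whose contacts are mostly known foes gets flagged.
agent = MobIDAgent()
for d in ("a", "b"):
    agent.record_friend(d)
for d in ("x", "y", "z"):
    agent.record_foe(d)
print(agent.is_likely_sybil({"x", "y", "b"}))  # True
```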

    StakeNet: using social networks to analyse the stakeholders of large-scale software projects

    Many software projects fail because they overlook stakeholders or involve the wrong representatives of significant groups. Unfortunately, existing methods in stakeholder analysis are likely to omit stakeholders, and they treat all stakeholders as equally influential. To identify and prioritise stakeholders, we have developed StakeNet, which consists of three main steps: identify stakeholders and ask them to recommend other stakeholders and stakeholder roles; build a social network whose nodes are stakeholders and whose links are recommendations; and prioritise stakeholders using a variety of social network measures. To evaluate StakeNet, we conducted one of the first empirical studies of requirements stakeholders on a software project for a 30,000-user system. Using the data collected from surveying and interviewing 68 stakeholders, we show that StakeNet identifies stakeholders and their roles with high recall, and accurately prioritises them. StakeNet uncovers a critical stakeholder role overlooked in the project, whose omission significantly impacted project success.
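
    The prioritisation step can be sketched with standard network-analysis tooling. The snippet below is an assumed illustration (the stakeholder names and the particular choice of measures are not from the paper); it builds a recommendation network and ranks nodes with common centrality measures.

```python
# Sketch of StakeNet-style prioritisation: rank stakeholders in a
# recommendation network using standard social network measures.
import networkx as nx

# Directed edge u -> v means "stakeholder u recommended stakeholder v".
G = nx.DiGraph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("dave", "carol"), ("carol", "erin"),
])

by_indegree = nx.in_degree_centrality(G)       # how often recommended
by_betweenness = nx.betweenness_centrality(G)  # brokerage position
by_pagerank = nx.pagerank(G)                   # recommended by the
                                               # well-recommended

# Highest-priority stakeholders first (here, by PageRank).
for name, score in sorted(by_pagerank.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```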

    TRULLO - local trust bootstrapping for ubiquitous devices

    Handheld devices have become sufficiently powerful that it is easy to create, disseminate, and access digital content (e.g., photos, videos) using them. The volume of such content is growing rapidly and, from the perspective of each user, selecting relevant content is key. To this end, each user may run a trust model - a software agent that keeps track of who disseminates content that its user finds relevant. The agent does so by assigning an initial trust value to each producer for a specific category (context); then, whenever it receives new content, the agent rates the content and accordingly updates its trust value for the producer in the content category. However, a problem with such an approach is that, as the number of content categories increases, so does the number of trust values to be initially set. This paper focuses on how to set initial trust values effectively. The most sophisticated of the current solutions employ predefined context ontologies, with which initial trust in a given context is set based on the trust already held in similar contexts. However, universally accepted (and time-invariant) ontologies are rarely found in practice. For this reason, we propose a mechanism called TRULLO (TRUst bootstrapping by Latently Lifting cOntext) that assigns initial trust values based only on local information (the ratings of its user’s past experiences) and that, as such, does not rely on third-party recommendations. We evaluate the effectiveness of TRULLO by simulating its use in an informal antique market setting. We also evaluate the computational cost of a J2ME implementation of TRULLO on a mobile phone.
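
    One way to "latently lift" context from local ratings alone is a low-rank factorisation of the user's own rating history. The following is a simplified, assumed sketch in that spirit (the matrix, the rank, and the treatment of missing entries are illustrative, not TRULLO's exact procedure):

```python
# Illustrative SVD-based trust bootstrapping: a low-rank reconstruction
# of the user's own past ratings predicts initial trust for unrated
# (producer, category) pairs, with no third-party recommendations.
import numpy as np

# Rows: content producers; columns: categories. 0.0 marks "no rating yet".
ratings = np.array([
    [0.9, 0.8, 0.0],   # producer 0: unrated in category 2
    [0.2, 0.1, 0.3],
    [0.8, 0.9, 0.7],
])

U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2                                   # keep the top-k latent factors
low_rank = (U[:, :k] * s[:k]) @ Vt[:k]  # rank-k reconstruction

# The reconstructed entry serves as the initial trust value.
print(f"bootstrapped trust for (producer 0, category 2): {low_rank[0, 2]:.2f}")
```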

    StakeSource: harnessing the power of crowdsourcing and social networks in stakeholder analysis

    Projects often fail because they overlook stakeholders. Unfortunately, existing stakeholder analysis tools only capture stakeholders' information, relying on experts to identify them manually. StakeSource is a web-based tool that automates stakeholder analysis. It "crowdsources" the stakeholders themselves for recommendations about other stakeholders and aggregates their answers using social network analysis.
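
    The aggregation step differs from StakeNet's prioritisation mainly in where the data comes from: survey responses become edges. A toy sketch, with hypothetical names and response format:

```python
# Toy sketch of the crowdsourcing aggregation: each surveyed stakeholder
# returns a list of recommendations, which become edges in the network.
import networkx as nx

responses = {
    "alice": ["bob", "carol"],
    "bob":   ["carol"],
    "carol": ["alice", "dave"],
}

G = nx.DiGraph()
for recommender, recommended in responses.items():
    for person in recommended:
        G.add_edge(recommender, person)

# Aggregate the crowd's answers: most-recommended stakeholders first.
print(sorted(G.in_degree(), key=lambda kv: -kv[1]))
```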

    Trust models for mobile content-sharing applications

    Using recent technologies such as Bluetooth, mobile users can share digital content (e.g., photos, videos) with other users in proximity. However, to reduce the cognitive load on mobile users, it is important that only appropriate content is stored and presented to them. This dissertation examines the feasibility of having mobile users filter out irrelevant content by running trust models. A trust model is a piece of software that keeps track of which devices are trusted (for sending quality content) and which are not. Unfortunately, existing trust models are not fit for purpose. Specifically, they lack the ability to: (1) reason about ratings other than binary ratings in a formal way; (2) verify the trustworthiness of stored third-party recommendations; (3) aggregate recommendations to make accurate predictions of whom to trust; and (4) reason across categories without resorting to ontologies that are shared by all users in the system. We overcome these shortcomings by designing and evaluating algorithms and protocols with which portable devices can automatically maintain information about the reputability of sources of content and learn from each other’s recommendations. More specifically, our contributions are: (1) an algorithm that formally reasons on generic (not necessarily binary) ratings using Bayes’ theorem; (2) a set of security protocols with which devices store ratings in (local) tamper-evident tables and check the integrity of those tables through a gossiping protocol; (3) an algorithm that arranges recommendations in a “Web of Trust” and makes predictions of trustworthiness that are more accurate than existing approaches by using graph-based learning; and (4) an algorithm that learns the similarity between any two categories by extracting similarities between the two categories’ ratings rather than by requiring a universal ontology, doing so automatically using Singular Value Decomposition. We combine these algorithms and protocols and, using real-world mobility and social network data, we evaluate the effectiveness of our proposal in allowing mobile users to select reputable sources of content. We further examine the feasibility of implementing our proposal on current mobile phones by examining the storage and computational overhead it entails. We conclude that our proposal is both feasible to implement and performs better across a range of parameters than a number of current alternatives.
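
    Contribution (1) can be sketched with a Dirichlet-multinomial model, one standard way of applying Bayes’ theorem to multi-level (non-binary) ratings; the dissertation's exact model may differ, and the rating scale below is invented for illustration:

```python
# Bayesian trust over a 5-level rating scale: a Dirichlet prior over the
# levels is updated with each observed rating; the posterior mean rating
# serves as the trust value.
import numpy as np

LEVELS = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # illustrative scale

class TrustModel:
    def __init__(self):
        # Uniform Dirichlet prior: one pseudo-count per rating level.
        self.counts = np.ones(len(LEVELS))

    def update(self, level_index: int) -> None:
        """Observe one rating at the given level (posterior update)."""
        self.counts[level_index] += 1

    def expected_trust(self) -> float:
        """Posterior mean rating, used as the trust value."""
        probs = self.counts / self.counts.sum()
        return float(probs @ LEVELS)

tm = TrustModel()
for observed in (4, 4, 3, 4):   # mostly high ratings
    tm.update(observed)
print(f"trust: {tm.expected_trust():.2f}")
```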

    Ocular hypertension in myopia: analysis of contrast sensitivity

    Purpose: we evaluated the evolution of contrast sensitivity reduction in patients affected by ocular hypertension and glaucoma, with low to moderate myopia. We also evaluated the relationship between contrast sensitivity and the mean deviation of the visual field. Material and methods: 158 patients (316 eyes), aged between 38 and 57 years, were enrolled and divided into 4 groups: emmetropes, myopes, myopes with ocular hypertension (IOP≥21±2 mmHg), and myopes with glaucoma. All patients underwent anamnestic and complete eye evaluation, tonometric curves with the Goldmann applanation tonometer, cup/disc ratio evaluation, gonioscopy with the Goldmann three-mirror lens, automated perimetry (Humphrey 30-2 full-threshold test), and contrast sensitivity evaluation with Pelli-Robson charts. A contrast sensitivity under 1.8 log units was considered abnormal. Results: contrast sensitivity was reduced in the group of myopes with ocular hypertension (1.788 log units) and in the group of myopes with glaucoma (1.743 log units), while it was preserved in the group of myopes (2.069 log units) and in the group of emmetropes (1.990 log units). We also found a strong correlation between contrast sensitivity reduction and the mean deviation of the visual field in myopes with glaucoma (correlation coefficient = 0.86) and in myopes with ocular hypertension (correlation coefficient = 0.78). Conclusions: contrast sensitivity assessment with the Pelli-Robson test should be performed in all patients with middle-grade myopia, ocular hypertension, and an optic disc suspicious for glaucoma, as it may be useful in the early diagnosis of the disease.

    Introduction: contrast can be defined as the ability of the eye to discriminate differences in luminance between a stimulus and its background. Contrast sensitivity is the inverse of the minimal contrast necessary to make an object visible: the lower the threshold contrast, the greater the sensitivity, and vice versa. Contrast sensitivity is a fundamental aspect of vision together with visual acuity: the latter defines the smallest spatial detail that the subject manages to discriminate under optimal conditions, but it only provides information about the size of the stimulus that the eye is capable of perceiving; the evaluation of contrast sensitivity, instead, provides information not obtainable from the measurement of visual acuity alone, as it establishes the minimum difference in luminance that must exist between the stimulus and its background for the retina to be adequately stimulated to perceive the stimulus. The clinical methods of examining contrast sensitivity (gratings, luminance gradients, variable-contrast optotype charts, and low-contrast optotype charts) relate the two parameters on which the ability to distinctly perceive an object depends, namely the difference in luminance between two adjacent areas and the spatial frequency, which is linked to the size of the object. The measurement of contrast sensitivity is valuable in the diagnosis and follow-up of some important eye conditions such as glaucoma. Studies show that contrast sensitivity can be related to data obtained with visual perimetry, especially to perimetric damage in the central area and at the optic nerve head.
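
    The definition given in the introduction can be stated compactly. As a sketch (using Weber contrast, one standard definition; the paper itself does not give these formulas), contrast sensitivity is the reciprocal of the threshold contrast, and charts such as Pelli-Robson report it on a logarithmic scale:

```latex
% Weber contrast of a stimulus against its background, contrast
% sensitivity as the reciprocal of the threshold contrast, and its
% logarithmic form as reported by letter charts such as Pelli-Robson.
\[
  C \;=\; \frac{L_{\text{stimulus}} - L_{\text{background}}}{L_{\text{background}}},
  \qquad
  \mathrm{CS} \;=\; \frac{1}{C_{\text{threshold}}},
  \qquad
  \log\mathrm{CS} \;=\; -\log_{10} C_{\text{threshold}}.
\]
```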

    Nowcasting gentrification using Airbnb data

    There is a rumbling debate over the impact of gentrification: presumed gentrifiers have been the target of protests and attacks in some cities, while they have been welcomed as generators of new jobs and taxes in others. Census data fails to measure neighborhood change in real time, since it is usually updated only every ten years. This work shows that Airbnb data can be used to quantify and track neighborhood changes. Specifically, we consider both structured data (e.g., number of listings, number of reviews, listing information) and unstructured data (e.g., user-generated reviews processed with natural language processing and machine learning algorithms) for three major cities: New York City (US), Los Angeles (US), and Greater London (UK). We find that Airbnb data (especially its unstructured part) appears to nowcast neighborhood gentrification, measured as changes in housing affordability and demographics. Overall, our results suggest that user-generated data from online platforms can be used to create socioeconomic indices that complement traditional measures, which are less granular, not available in real time, and more costly to obtain.
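
    A minimal, assumed sketch of the nowcasting idea follows: combine a structured listing feature with text features from reviews to fit a model of neighborhood change. The data, features, and target variable below are toy placeholders, not the paper's pipeline.

```python
# Combine structured (listing counts) and unstructured (review text)
# signals to predict a neighborhood-change target.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

reviews_per_area = [
    "lovely renovated flat, trendy coffee shops nearby",
    "quiet family neighborhood, basic amenities",
    "new galleries and bars opening, very hip area",
]
listings_count = np.array([[120.0], [15.0], [90.0]])    # structured signal
affordability_change = np.array([-0.08, 0.01, -0.05])   # toy target

text_features = TfidfVectorizer().fit_transform(reviews_per_area)
X = hstack([text_features, csr_matrix(listings_count)])

model = Ridge(alpha=1.0).fit(X, affordability_change)
print(model.predict(X))
```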

    Social interactions or business transactions? What customer reviews disclose about Airbnb marketplace

    Airbnb is one of the most successful examples of a sharing economy marketplace. Given its rapid and global market penetration, understanding its attractiveness and evolving growth opportunities is key to business decision making. There is an ongoing debate, for example, about whether Airbnb is a hospitality service that fosters social exchanges between hosts and guests, as the sharing economy manifesto originally stated, or whether it is (or is evolving into) a purely business transaction platform, the way hotels have traditionally operated. To answer these questions, we propose a novel market analysis approach that exploits customers’ reviews. Key to the approach is a method that combines thematic analysis and machine learning to inductively develop a custom dictionary for guests’ reviews. Based on this dictionary, we then apply quantitative linguistic analysis to a corpus of 3.2 million reviews collected in 6 different cities, and illustrate how to answer a variety of market research questions at fine levels of temporal, thematic, user, and spatial granularity, such as: (i) how the business vs. social dichotomy is evolving over the years; (ii) which words within these top-level categories are changing; (iii) whether such trends vary across different user segments; and (iv) whether they vary across neighbourhoods.
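
    Once a dictionary exists, the quantitative step reduces to counting category words per review and tracking the counts over time. A toy sketch (the actual custom dictionary was developed with thematic analysis; the word lists below are invented placeholders):

```python
# Dictionary-based linguistic analysis: count "social" vs "business"
# dictionary words per review and aggregate by year.
import re
from collections import Counter

DICTIONARY = {
    "social":   {"friendly", "welcomed", "chat", "family", "warm"},
    "business": {"checkin", "location", "clean", "efficient", "keys"},
}

def categorise(review: str) -> Counter:
    """Count how many dictionary words of each category a review uses."""
    tokens = re.findall(r"[a-z]+", review.lower())
    return Counter({cat: sum(t in words for t in tokens)
                    for cat, words in DICTIONARY.items()})

reviews_2015 = ["We were welcomed like family, lovely chat with the host"]
reviews_2019 = ["Efficient checkin, clean flat, great location"]

for year, reviews in (("2015", reviews_2015), ("2019", reviews_2019)):
    total = Counter()
    for r in reviews:
        total.update(categorise(r))
    print(year, dict(total))
```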

    Smelly maps: the digital life of urban smellscapes

    Smell has a huge influence over how we perceive places. Despite its importance, smell has been largely overlooked by urban planners and scientists alike, not least because it is difficult to record and analyze at scale. One of the authors of this paper has ventured out into the urban world and conducted "smellwalks" in a variety of cities: participants were exposed to a range of different smellscapes and asked to record their experiences. As a result, smell-related words have been collected and classified, creating the first dictionary for urban smell. Here we explore the possibility of using social media data to reliably map the smells of entire cities. To this end, for both Barcelona and London, we collect geo-referenced picture tags from Flickr and Instagram, and geo-referenced tweets from Twitter. We match those tags and tweets with the words in the smell dictionary. We find that smell-related words are best classified into ten categories. We also find that specific categories (e.g., industry, transport, cleaning) correlate with governmental air quality indicators, adding validity to our study.
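
    The tag-matching step can be sketched as follows; the three-category dictionary below is a tiny invented stand-in for the real ten-category one, and the coordinates are made up.

```python
# Match geo-referenced tags against a smell dictionary, grouping the
# matching coordinates by smell category.
from collections import defaultdict

SMELL_DICTIONARY = {
    "transport": {"exhaust", "petrol", "fumes"},
    "nature":    {"grass", "flowers", "pine"},
    "food":      {"coffee", "bakery", "curry"},
}

# (latitude, longitude, tag) triples, e.g. from Flickr/Instagram photos.
geo_tags = [
    (51.50, -0.12, "exhaust"),
    (51.51, -0.10, "coffee"),
    (51.52, -0.11, "flowers"),
]

smell_map = defaultdict(list)
for lat, lon, tag in geo_tags:
    for category, words in SMELL_DICTIONARY.items():
        if tag in words:
            smell_map[category].append((lat, lon))

# Each category's coordinates can then be aggregated per street or grid
# cell and compared against air-quality indicators.
print(dict(smell_map))
```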

    Who benefits from the "sharing" economy of Airbnb?

    Sharing economy platforms have become extremely popular in the last few years, and they have changed the way in which we commute, travel, and borrow, among many other activities. Despite their popularity among consumers, such companies are poorly regulated. For example, Airbnb, one of the most successful examples of a sharing economy platform, is often criticized by regulators and policy makers. While, in theory, municipalities should regulate the emergence of Airbnb through evidence-based policy making, in practice they engage in a false dichotomy: some municipalities allow the business without imposing any regulation, while others ban it altogether. That is because there is no evidence upon which to draft policies. Here we propose to gather evidence from the Web. After crawling Airbnb data for the entire city of London, we find out where and when Airbnb listings are offered and, by matching such listing information with census and hotel data, we determine the socio-economic conditions of the areas that actually benefit from the hospitality platform. The reality is more nuanced than one would expect, and it has changed over the years. Airbnb demand and supply have changed over time, and traditional regulations have not been able to respond to those changes. That is why, finally, we rely on our data analysis to envision regulations that are responsive to real-time demands, contributing to the emerging idea of “algorithmic regulation”.
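
    The matching step boils down to aggregating crawled listings by census area and joining the result with a socio-economic indicator. A minimal sketch with toy data (the area codes and the `deprivation_index` column are hypothetical):

```python
# Aggregate listings per census area and join with census data to see
# which socio-economic strata host the most Airbnb supply.
import pandas as pd

listings = pd.DataFrame({
    "listing_id": [1, 2, 3, 4],
    "census_area": ["E01", "E01", "E02", "E03"],
})
census = pd.DataFrame({
    "census_area": ["E01", "E02", "E03"],
    "deprivation_index": [12.3, 35.1, 22.8],  # hypothetical indicator
})

offer = (listings.groupby("census_area").size()
         .rename("n_listings").reset_index())
merged = census.merge(offer, on="census_area", how="left").fillna(0)

print(merged.sort_values("n_listings", ascending=False))
```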